Psychonomic Bulletin & Review
Springer Science and Business Media LLC
Preprints posted in the last 90 days, ranked by how well they match Psychonomic Bulletin & Review's content profile, based on 14 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit.
Tian, K. J.; Motzer, J. A.; Denison, R. N.
When successive stimuli occur close enough together in time, their perception can be impaired. Such impairments indicate temporal competition between successive stimuli for representational resources. Voluntary temporal attention can bias processing resources in favor of a behaviorally relevant moment, improving perception at the attended time at the expense of impairments at unattended times. However, it is unclear whether these perceptual tradeoffs across time arise because voluntary temporal attention selects among actively competing stimulus representations, such as within visual working memory, or if instead, temporal attention facilitates stimulus processing prior to a competitive stage. Here we used a temporal cueing task with up to two targets in succession to test whether and how the effects of temporal attention depend on temporal competition. We found that voluntary temporal attention improved performance even in the absence of temporal competition, when only one stimulus appeared during the trial. Moreover, the magnitude of attentional enhancement was comparable with and without competition. These results suggest that voluntary temporal attention enhances perception by facilitating processing prior to a competitive stage, rather than by resolving conflicts between actively competing stimulus representations. (Graphical abstract omitted.)
Zylberberg, A.
The ability to evaluate one's own knowledge states is often studied using paradigms in which participants make a decision and subsequently report their confidence. This structure has motivated hierarchical models in which confidence arises from a metacognitive process, distinct from the decision process itself, that estimates the probability that the choice is correct (Meyniel et al., 2015; Pouget et al., 2016; Fleming and Daw, 2017). Here, we contrast this framework with an alternative based on an intentional architecture (Shadlen et al., 2008). In this account, choice and confidence are determined simultaneously through a multidimensional drift-diffusion process, where each dimension represents one choice-confidence combination (Ratcliff and Starns, 2009, 2013). Choice, response time, and confidence jointly emerge when one of these accumulators reaches a decision bound. To adjudicate between these accounts, we fit both models to behavioral data from two perceptual tasks: a random-dots motion discrimination task with incentivized confidence reports, and a luminance discrimination task without feedback or incentives. The integrated model provided a superior fit for the incentivized motion task, whereas the hierarchical model more accurately captured behavior in the unincentivized luminance task. These results suggest that confidence does not rely on a single computational mechanism, but rather its implementation may adapt to the specific demands and structure of the task.
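As a sketch of the second account, the race architecture can be simulated as a set of noisy accumulators, one per choice-confidence combination, racing toward a common bound; the first to arrive jointly fixes choice, confidence, and response time. The function name, parameters, and the simple Euler scheme below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def race_ddm(drifts, bound=1.0, dt=1e-3, noise=1.0, rng=None):
    """Race among accumulators, one per choice-confidence combination.

    The first accumulator to reach `bound` jointly determines the
    winning choice-confidence combination and the response time
    (illustrative sketch, not the authors' fitted model).
    """
    rng = np.random.default_rng() if rng is None else rng
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros_like(drifts)  # accumulated evidence per accumulator
    t = 0.0
    while True:
        # Euler step: deterministic drift plus Gaussian diffusion noise
        x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        t += dt
        winner = int(np.argmax(x))
        if x[winner] >= bound:
            return winner, t
```

With a strong drift favoring one accumulator, that accumulator wins on nearly every simulated trial, and response time emerges from the same race rather than from a separate metacognitive readout.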
Eicke-Kanani, L.; Tatai, F.; Rosenberger, L.; Schmitter, C.; Straube, B.; Wallis, T. S.
Michotte's "launching displays" are animations of collision-like interactions between two objects that elicit a stable and robust impression that one object, the launcher, caused another object, the target, to move. Although it is well known that unexpected disruptions of movement continuation between launcher and target decrease causal impressions in centre-to-centre collisions, the role of observers' visual uncertainty around predicted moving trajectories remains relatively unexplored. In this work, we (1) assess observers' uncertainty around post-collision moving angles in a trajectory prediction task and (2) collect their causal impressions in a causality rating task. In the latter task, observers viewed centre-to-centre collisions with different levels of movement continuity between the launcher and the target disc. By presenting different launch orientations, we exploited the well-known oblique effect to vary trajectory prediction uncertainty within individuals. If observers rely on their trajectory predictions to rate the causality of the collision, we expect their accuracy in (1) to have a systematic influence on their causality ratings in (2). We replicate previous findings that observers report stronger causal impressions in trials where the target and the launcher move in the same direction, and weaker causal impressions for collisions where the moving trajectories of the target and the launcher deviated. Furthermore, causality ratings were on average higher for oblique compared to cardinal launch directions, implying that increased sensory uncertainty induces a stronger causal impression. We hope this work will inspire deeper empirical assessments and computational models describing the role of sensory uncertainty and predictive processes in shaping subjective impressions of causality.
Rawal, A.; Wolff, M. J.; Rademaker, R. L.
Visual working memory allows for the brief maintenance of information to serve behavioral goals. It has been shown that when the specific action required to serve a future goal is predictable, people can flexibly change a visual memory representation to incorporate an action-based one, demonstrating the goal-oriented nature of visual working memory. Can such flexibility also be observed within the visual domain, between color and space? In this eye-tracking study, participants remembered either a centrally presented color or a spatial position around fixation. Critically, when remembering a color the response wheel was either randomly rotated, or shown at a fixed rotation, on every trial. When fixed, every target color could be associated with a predictable position on the wheel during response. Do people incorporate this added spatial information in their behavior? Participants utilized color-space associations when remembering color: Response initiation happened faster when the color wheel was fixed compared to random, irrespective of whether an action could be planned or not. Next, we showed that gaze was biased towards the position of the spatial memory target during the delay, extending previous work on gaze biases. Importantly, also when remembering a color, gaze was biased towards the anticipated position of that color on the response wheel when it was fixed. Together, our results show a behavioral benefit of added spatial information for color memory, and systematic changes in gaze that reflect flexible utilization of space.
Guerra, S.; Roccato, M.; Oletto, C. M.; Ghiani, A.; Bertamini, M.; Battaglini, L.
Plant Awareness Disparity (PAD) refers to the inability of humans to notice plants and recognize their importance. Among the various factors (e.g., cultural) contributing to PAD, the less prominent visual cues of plants (e.g., color) might be one of the main features making them less noticeable to human perception. Here, we investigated whether PAD affects basic numerosity perception, which represents a fundamental cognitive ability that allows individuals to interpret and interact with their surroundings. Across three experiments, we compared how participants perceive the numerosity of plants (specifically trees), animals, and minerals. Participants completed two tasks: an estimation task, in which they reported the exact number of items in a single set, and a comparison task, which required them to discriminate numerosity between two sets of items. In Experiment 1, both tasks employed colored images. We hypothesized that participants would underestimate the number of plant items in comparison to animals and minerals, given that plant stimuli typically attract less attention. In Experiment 2, black and white images were used to test whether the green color of plants contributes to PAD. In Experiment 3, all items were rotated by 180° to disrupt semantic recognition and assess whether PAD arises from higher-level cognitive processes. Results revealed a consistent underestimation of plants in Experiments 1 and 2, but this effect diminished in Experiment 3. The reduction of this effect suggests that semantic recognition processes may contribute to PAD. These results highlight how cognitive biases toward plants can influence basic perceptual judgments essential for everyday functioning.
Seo, S.; Lee, S.; Lee, N.; Kim, S.-P.
Choice overload occurs when an ever-growing number of options impairs decision quality, because evaluating options taxes cognitive resources. We investigated whether reducing cognitive demand could mitigate overload by encouraging greater cognitive effort toward optimal choice. We conducted two experiments manipulating cognitive demand in complementary ways: Experiment 1 reduced demand by presenting high-attractiveness sets, and Experiment 2 did so by providing a shortlist tool. In both experiments, participants chose from sets of 6-24 options while their eye-gaze and electroencephalographic (EEG) data were recorded. We found that reducing demand made decisions faster, but did not improve choice performance as set size increased. Under low-demand conditions, eye-gaze measures revealed narrower search and EEG measures showed reduced working memory engagement per option, together indicating reduced search and processing effort. These results suggest that even with reduced cognitive demand, people coast through easier decisions, conserving effort and leaving the choice overload effect largely intact.
Zheng, Y.; Chen, L.
Perceptual processing integrates information from multiple sensory modalities to form a coherent representation of the environment. A classic example is the Sound-Induced Flash Illusion (SIFI), where the perceived number of visual flashes is altered by conflicting auditory stimuli. While the SIFI is a well-established phenomenon of multisensory integration, the influence of physical spatial characteristics--specifically stimulus eccentricity and spatial congruence--on integration levels remains debated. To address this gap, this study used the SIFI paradigm to investigate the effect of visual stimulus spatial location and the spatial congruence between auditory and visual stimuli on audiovisual integration. In Experiments 1 and 2, we found that when spatial attention was controlled via cueing, unimodal visual performance remained consistent across locations. However, the susceptibility to SIFI increased progressively from the central to the peripheral visual field, exhibiting a spatial pattern of Gaussian distribution. Bayesian modeling further supported this by showing that this spatial modulation was driven by an increase in the integration weight assigned to audiovisual representations in the periphery, rather than changes in sensory uncertainty alone. Conversely, Experiment 3 demonstrated that the spatial congruence of audiovisual stimuli did not affect the SIFI or alter the integration processing. These findings refine our current understanding of the spatial modulation of audiovisual integration. By incorporating the visual system's spatial properties into a Bayesian framework, we provide a computational explanation for the eccentricity-dependent nature of multisensory integration.
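The precision-weighting logic behind such Bayesian accounts can be illustrated with a minimal cue-fusion sketch: each modality's estimate is weighted by its inverse variance, so as visual noise grows toward the periphery, the auditory count pulls the fused percept more strongly. This is a generic sketch of the integration idea, not the authors' full model, which additionally fits integration weights.

```python
def fused_estimate(mu_v, sigma_v, mu_a, sigma_a):
    """Precision-weighted fusion of a visual and an auditory estimate.

    Each cue is weighted by its inverse variance (precision), so a
    noisier visual signal hands more weight to audition (illustrative
    sketch of reliability-weighted Bayesian integration).
    """
    w_v = (1.0 / sigma_v**2) / (1.0 / sigma_v**2 + 1.0 / sigma_a**2)
    return w_v * mu_v + (1.0 - w_v) * mu_a
```

For example, with one visual flash and two beeps, tripling the visual noise (as in the periphery) moves the fused numerosity from halfway between the cues toward the auditory count, qualitatively reproducing a stronger illusion at larger eccentricities.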
Nakamura, A.; Luo, J.; Yokoi, I.; Takemura, H.
Visual perception of symbolic numerals is essential for everyday tasks; however, the neural and perceptual mechanisms underlying this ability remain unclear. Partially occluded digital numerals can elicit bistable perception, and adaptation to symbolic numerals alters the perception of these ambiguous stimuli. We aimed to examine how symbolic numeral adaptation is related to hierarchical visual processing by testing its interocular and interhemifield transfer. Experiment 1 tested interocular transfer by presenting the test stimulus to either the same or opposite eye as the adaptation stimulus. Experiment 2 assessed interhemifield transfer by presenting the test stimulus to either the same or opposite hemifield as the adaptation stimulus. Experiment 3 examined the interhemifield transfer of adaptation confined to the upper parts of digital numerals. Our results showed that adaptation to digital numerals induced shifted perceptual interpretations that transferred across eyes. In addition, we found that adaptation to digital numerals induced a relatively small but statistically significant interhemifield transfer. In contrast, adaptation restricted to the upper parts of digital numerals showed no significant interhemifield transfer. These findings suggest that the perceptual interpretation of symbolic numerals involves visual processing stages that integrate information across the eyes and hemifields.
Ekinci, M. A.; Kaiser, D.
When individuals view the same visual input, they often differ in their aesthetic appeal judgments, yet why people differ remains largely unclear. Here, we tested whether individual differences in aesthetic experience are linked to differences in visual exploration. In two experiments, participants watched the documentary "Home" while their eye movements were recorded. In Experiment 1, participants continuously rated aesthetic experience throughout the movie, whereas in Experiment 2, they watched the first half without a task and rated aesthetic experience only during the second half. Inter-individual similarity in gaze patterns, assessed using fixation heatmaps across time, predicted similarity in aesthetic appeal judgments in both experiments. Notably, in Experiment 2, gaze similarity during free viewing in the first half of the movie predicted similarity in aesthetic ratings during the second half, indicating that incidental eye movement patterns predict aesthetic experiences. Together, these results show that shared gaze patterns are linked to shared aesthetic experiences under naturalistic, dynamic viewing conditions.
Quirmbach, F.; Helmert, J. R.; Pannasch, S.; Dix, A.; Limanowski, J.
For eye-hand coordination, predictions of sensory movement consequences may already be issued, and adjusted, during action preparation. In this pre-registered study, we combined a delayed-movement paradigm with a virtual reality-based hand-eye tracking task to investigate the oculomotor correlates of planning and executing coordinated hand-eye movements under standard vs. nonstandard visual hand movement feedback. We measured pupil dilation and gaze-hand tracking during action preparation and subsequent task execution, where visual movement feedback violated or matched cued expectations: Participants prepared and, after a delay period, executed hand movements. Their movements were reflected by congruent or incongruent (inverted) movements of a glove-controlled virtual hand model, which they had to follow with their gaze. In the preceding delay period, visual cues could specify the to-be-executed movement (or leave it unspecified), and the visuomotor mapping (congruent or incongruent, 75% cue validity). We found that during the delay, pupil diameter increased more strongly when the movement was pre-cued (compared to left unspecified), and when nonstandard rather than standard visual movement feedback was expected. During execution, gaze-hand tracking performance decreased under nonstandard mappings, but significantly less so when the to-be-executed movement was pre-cued. Expectation violation trials produced a strong pupil dilation, particularly when congruent (standard) visuomotor expectations were violated, but also when incongruent mappings were cued but congruent ones observed. Furthermore, expectation violation impaired tracking performance, again more strongly for pre-cued movements with standard mapping. Our results indicate that oculomotor responses during the delay encode processes related to motor planning and flexible forward prediction of sensory action consequences ahead of execution, i.e., increased mental effort and expectations of sensory conflict. Moreover, the results demonstrate that the strength of these (updated) predictions affects eye-hand coordination and pupillary responses during subsequent execution of the planned action.
Kalburge, I.; Dallstream, A.; Josic, K.; Kilpatrick, Z. P.; Ding, L.; Gold, J. I.
Decisions based on evidence accumulated over time require rules governing when to end the accumulation process and commit to a choice. These rules control inherent trade-offs between decision speed and accuracy, which require careful balance to maximize quantities that depend on both, such as reward rate. We previously showed that, to maximize reward rate, normative decision rules adapt to changing task conditions (Barendregt et al., 2022). Here we used a novel task to examine whether and how people use adaptive rules for individual decisions under a variety of conditions, including changes in decision outcomes across trials and changes in evidence quality both across and within trials. We found that participants tended to use rules that adjusted, at least partially, to predictable changes in task conditions to improve reward rate, consistent with a rationally bounded implementation of normative principles. These findings help inform our understanding of the extent and limits of flexible decision formation in the brain.
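The reward-rate objective these normative rules optimize can be written as correct rewards earned per unit of task time; a small sketch shows why the best speed-accuracy balance shifts with task conditions. The formula and parameter names below are illustrative assumptions, not taken from the paper.

```python
def reward_rate(p_correct, mean_rt, iti, error_penalty=0.0):
    """Correct rewards earned per unit of task time.

    Lowering the decision bound shortens mean_rt but lowers p_correct,
    so the rate-maximizing bound shifts with task conditions such as
    the inter-trial interval (iti) or an error timeout (illustrative
    formula; parameter names are ours, not the paper's).
    """
    return p_correct / (mean_rt + iti + (1.0 - p_correct) * error_penalty)
```

For instance, a fast-but-sloppy rule can beat a slow-but-accurate one when errors are cheap, while adding an error timeout reverses the ordering, which is why a fixed bound cannot be optimal across changing conditions.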
Nagisa, S.; Oblak, E.; Shimojo, S.; Shibata, K.
Multitasking is generally regarded as detrimental to performance. This deterioration effect is typically explained by interference among tasks due to the limited capacity of information-processing resources, which in turn reduces performance in each task. Contrary to this general view, we report evidence for a facilitation effect of multitasking on performance. This facilitation effect was observed in multitasking on a handgrip muscular endurance task and a cognitive task, which are known to have little interference with each other. Specifically, we found that performance in the endurance task improved as the difficulty of the concurrent cognitive task increased. This facilitation effect was mediated by additional pupil dilation due to the cognitive task. Increased effort with the difficulty of the cognitive task cannot explain the facilitated performance in the irrelevant endurance task. Instead, these results suggest that the cognitive task elevated overall arousal to a level unattainable by the endurance task alone, which in turn facilitated performance in the irrelevant endurance task. To further test this arousal account, we manipulated participants' motivation for the cognitive task via reward without changing its difficulty and found the same pattern of results. Thus, it is not effort or motivation specific to the cognitive task but rather overall arousal level that underlies the facilitation effect. These results unveiled a previously overlooked mechanism: a multitasking-induced arousal boost. Our findings suggest that multitasking can facilitate performance when the net effect of adding a concurrent task is governed less by the capacity limitation and more by the elevation of overall arousal.
Rodriguez-San Esteban, P.; Capizzi, M.; Gonzalez-Lopez, J. A.; Chica, A. B.
Can we rescue a percept that would otherwise be processed non-consciously? While pre-stimulus alerting is known to facilitate conscious access, the effects of retro-cues remain ambiguous due to methodological confounds in the existing literature. Specifically, most studies finding retro-cue benefits have relied on spatial features (such as lateralized targets or cues), which confound alerting with spatial selection. Our design addresses this gap by employing central visual targets and non-lateralized auditory cues, thereby isolating the temporal boost of phasic alerting from spatial orienting. Across four experiments, participants reported the presence and orientation of a central Gabor patch presented at near-threshold (~50% detection) or higher visibility (~75% detection) levels. An auditory alerting tone was presented before, simultaneously with, or after the Gabor, at various stimulus onset asynchronies spanning both short and long temporal ranges. Results consistently showed that pre-stimulus and simultaneous cues significantly enhanced conscious perception, increasing both seen rates and (in some experiments) perceptual sensitivity. Crucially, the effectiveness of retro-cues strictly depended on stimulus visibility. While retro-cues provided no benefit under near-threshold conditions, an alerting cue presented 200 ms after target offset significantly increased the proportion of seen targets when target visibility was higher. This suggests that a sufficiently robust sensory trace can be retrospectively rescued or promoted into awareness by a late alerting boost, and that pure alerting retro-cues are able to modulate conscious perception even when no spatial features are involved. These findings demonstrate a decoupling of stimulus onset from the timing of conscious access, providing a behavioural platform to arbitrate between competing models of consciousness, such as the Global Neuronal Workspace Theory and the phenomenal/access distinction.
Koss, C.; Blanke, J.-H.; de la Cuesta-Ferrer, L.; Jakel, F.; Stuttgen, M. C.
Signal detection theory posits that subjects in two-stimulus, two-choice discrimination tasks decide by comparing random samples of an evidence variable to a static decision criterion. While the core assumptions of the theory have received ample experimental support, it has become evident that the decision criterion is not static but subject to trial-by-trial fluctuations and can be influenced by experimental manipulations. The mechanisms governing the trial-by-trial criterion changes are, however, not well understood. Here, we report results from five experiments in which we subjected rats to a two-stimulus, two-choice auditory discrimination task. In the first three experiments, we investigated the effects of stimulus presentation ratios and reward ratios and provide clear evidence that the effects of changing reward ratios are more pronounced than those of stimulus presentation ratios. A model-based analysis revealed that this effect was due to more than tenfold higher learning rates when reward ratios were manipulated. In two separate experiments, we investigated the effect of reward density (i.e., global reward rate) on criterion learning but failed to find consistent effects. A systematic comparison of three different trial-by-trial criterion learning models based on detection theory, the matching law, and reinforcement learning showed that no model was able to capture the differential effects of stimulus presentation and reward ratios. We conclude that subjects explicitly represent either prior stimulus probabilities or entire stimulus distributions, and accordingly future models need to represent these factors as well.
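A minimal version of such a trial-by-trial criterion model is a delta-rule update in which rewarded choices shift the criterion in their own favor; fitting a larger learning rate for reward-ratio sessions than for stimulus-ratio sessions is one way to express the reported asymmetry. The sketch below assumes the convention that evidence above the criterion triggers a "right" response; it is an illustrative stand-in, not one of the authors' fitted models.

```python
def update_criterion(c, choice, rewarded, lr):
    """Delta-rule criterion update after one trial.

    Convention: evidence above criterion c triggers a "right" response.
    A rewarded choice shifts c so that the same choice becomes more
    likely on the next trial; unrewarded trials leave c unchanged
    (illustrative sketch, not the authors' fitted models).
    """
    if rewarded:
        c += lr if choice == "left" else -lr
    return c
```

Iterating this rule over a session drifts the criterion toward the richer response side, and a roughly tenfold larger `lr` for reward manipulations than for stimulus-ratio manipulations would reproduce the reported difference in effect size.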
Li, L.; Landy, M. S.
Sensory representations are inherently noisy, and monitoring this noise is essential for effective decision-making. This metacognitive ability to evaluate the quality of one's perceptual decisions is referred to as perceptual confidence. However, whether perceptual confidence accurately tracks internal noise remains unresolved. Peripheral vision provides a natural testing ground for this question, yet previous studies report mixed results complicated by different definitions and measurements of confidence. Here, we used a normative Bayesian framework with incentivized confidence measurements to address these discrepancies. We tested the Bayesian-confidence hypothesis that confidence is derived from the posterior probability distribution of the feature being judged, given noisy sensory measurements. We used two perceptual tasks while varying stimulus eccentricity: spatial localization and orientation estimation. We measured confidence by post-decision wagering, in which participants set a symmetrical range around their perceptual estimates. Participants earned higher reward for narrower confidence ranges but received zero reward if the range did not enclose the target. We estimated sensory noise from the perceptual responses to predict confidence, assuming that sensory noise linearly increases with eccentricity. We then compared a normative Bayesian model with three alternative models that challenged different assumptions. Across both tasks, the Bayesian ideal-observer model best predicted confidence. These results suggest that humans can accurately monitor the increased internal noise in peripheral vision and use this information to make optimal confidence judgments.
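The post-decision wagering rule can be sketched as an expected-reward trade-off: widening the confidence range raises the chance of enclosing the target but lowers the payoff, so the optimal half-width grows with sensory noise, and hence with eccentricity. The linear payoff schedule and grid search below are our assumptions; only the qualitative structure follows the paper.

```python
import numpy as np
from math import erf, sqrt

def expected_reward(h, sigma, max_reward=1.0, cost=0.5):
    """Expected payoff of a symmetric confidence range of half-width h.

    Assumes the estimation error is Gaussian with sd sigma: the payoff
    shrinks linearly with h and is earned only if the target falls
    inside the range (the payoff rule is our assumption, not the
    paper's incentive scheme).
    """
    p_inside = erf(h / (sigma * sqrt(2.0)))  # P(|error| < h)
    return max(max_reward - cost * h, 0.0) * p_inside

def best_halfwidth(sigma):
    """Grid-search the reward-maximizing range half-width."""
    grid = np.linspace(0.01, 2.0, 400)
    return grid[np.argmax([expected_reward(h, sigma) for h in grid])]
```

An ideal observer who tracks the growth of sigma with eccentricity would therefore set wider wagers in the periphery, which is the behavioral signature the Bayesian model predicts.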
Hayes, H. R.; Campagnoli, C.
Virtual Reality (VR) applications depend on eliciting spatial presence, the subjective experience of being physically located within a virtual environment. Although individual differences have long been theorised to contribute to this experience, their role in highly immersive VR systems remains contested. The present study investigated whether trait absorption predicts spatial presence and whether this relationship is mediated by attention allocation. Seventy participants (44 female, 26 male; M age = 22.90, SD = 4.88) completed a 6-minute VR session using a Meta Quest 3 Head-Mounted Display and validated self-report measures of trait absorption (Tellegen Absorption Scale), attention allocation, and spatial presence (MEC-Spatial Presence Questionnaire). Path analysis confirmed a significant, complete mediation pathway: trait absorption positively predicted attention allocation (β = 0.27, p = .013), which in turn strongly predicted spatial presence (β = 0.54, p < .001). The direct path from absorption to spatial presence was non-significant (β = 0.11, p = .325), indicating complete mediation. The indirect effect was significant (β = 0.15; 95% BCa CI [0.025, 0.291]). The model explained a sizeable 33.8% of the variance in spatial presence (Cohen's f² = 0.51). Post-hoc dose-response analysis revealed that trait absorption acts as a cognitive amplifier: the strength of the attention-presence relationship tripled from low-absorption (β = 0.33, R² = .15) to high-absorption individuals (β = 1.00, R² = .56). These findings demonstrate that individual differences remain important in highly immersive VR by modulating the effectiveness of attentional focus, offering promising directions for tailoring VR interventions.
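The reported indirect effect can be checked with the standard product-of-coefficients rule for simple mediation: multiplying the two path coefficients reproduces the reported β ≈ 0.15 (0.27 × 0.54 = 0.1458).

```python
def indirect_effect(a, b):
    """Product-of-coefficients indirect effect for a simple mediation
    model X -> M -> Y (here: absorption -> attention -> presence)."""
    return a * b

# Paths reported in the abstract: a = 0.27 (absorption -> attention),
# b = 0.54 (attention -> presence); 0.27 * 0.54 = 0.1458, i.e. ~0.15.
```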
Benzaquen, E.; Griffiths, T. D.; Kumar, S.
Prior expectations are known to shape perception, especially when a stimulus is ambiguous. Bayesian models of cognition posit that perception is a precision-weighted combination of top-down and bottom-up information. We consider here affective responses to highly salient stimuli, for which a dominant role of bottom-up processing has previously been emphasised. We study how predictions alter the perception of emotional stimuli in a paradigm in which neutral and aversive sounds were preceded by either predictive or non-predictive cues. Cues predicted the type of sound with 100% or 50% probability. Behavioural measures of trial-by-trial expectation and perceived aversiveness were collected before and after stimulus presentation, respectively. We show that prior expectations biased the perceived aversiveness of sounds towards predictions, but only when subjective expectations were considered (as opposed to the objective expectation based on conditional probability). Neural responses were recorded using EEG. During sound processing, we found P3 and LPP components were increased after non-predictive cues, but only for affective stimuli. Time-frequency analyses uncovered a role of alpha-beta oscillations in the precision of predictions, as well as in the processing of unexpected stimuli. Our results indicate that expectations directly alter the perception of affective stimuli and their processing, and emphasise the importance of behavioural measures to characterize this relationship.
Ruffino, C.; Jacquet, T.; Lepers, R.; Papaxanthis, C.; Truong, C.
Mental fatigue is known to impair cognitive and motor performance, but its impact on motor learning remains unclear. This study examined how mental fatigue affects skill acquisition in a sequential finger-tapping task. Twenty-eight participants were assigned to either a mental fatigue group, which completed a thirty-minute Stroop task, or a control group, which watched a documentary of equivalent duration. Both groups then trained on the finger-tapping task across multiple practice blocks with brief rest periods. Overall motor skill improved similarly in both groups. However, mental fatigue altered the pattern of acquisition: participants in the fatigue group showed decreased performance during practice blocks, which was compensated by larger gains during inter-block rest periods. A strong negative correlation was observed between online decrements and offline improvements, indicating that greater declines during practice were associated with larger gains during rest. This study highlights the critical role of rest periods in maintaining learning under cognitively demanding conditions and provides insight into how internal states, such as mental fatigue, can selectively influence the expression of performance without compromising overall learning.
Völler, J.; Linde-Domingo, J.; Gonzalez-Garcia, C.
Suddenly finding the solution to a problem after a period of impasse often comes with a feeling of insight. This subjective experience is proposed to arise as a consequence of prediction errors. Accordingly, previous studies have revealed that more incorrect initial predictions result in more intense insights. Crucially, however, prominent models of Bayesian inference suggest that levels of computationally defined surprise depend not only on the distance between predictions and inputs, but also on their precision or certainty. Yet how these two factors interact to give rise to insight experiences remains unknown. In this pre-registered study, participants were exposed to ambiguous images while they tried to guess the correct label of the image (to derive prediction accuracy) and rated their confidence in that label (for prediction uncertainty). We then measured the intensity of their insight when a solution was given. As predicted, we found that the intensity of insight was a result of both the prediction accuracy and the uncertainty awarded to it. More specifically, when initial predictions were far from the true label, those made with lower confidence induced weaker insights, while the opposite pattern was observed when predictions were closer to reality. Trial-by-trial estimations of prediction errors from participants' responses closely mirrored insight ratings. Finally, we analysed data from two additional independent datasets with different modalities and setups and replicated the interaction between prediction accuracy and uncertainty on the intensity of insight. Altogether, these findings suggest that insight experiences are read out from prediction errors and highlight the key role of uncertainty in characterising this relationship.
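The reported interaction falls out of a simple two-outcome prediction-error sketch: a wrong guess held with confidence c assigned only 1 − c to the truth, giving a prediction error of roughly c, whereas a nearly correct guess gives a prediction error of roughly 1 − c. The function below is an illustrative reading of the result, not the authors' trial-by-trial estimator.

```python
def insight_pe(correct, confidence):
    """Prediction error under an assumed two-outcome belief.

    If the guess was wrong, the belief assigned to the truth was
    1 - confidence, so PE = confidence; if the guess was (nearly)
    correct, PE = 1 - confidence. This reproduces the reported
    crossover: far-off predictions yield stronger insight when made
    confidently, near-correct ones when made with low confidence
    (illustrative sketch, not the authors' estimator).
    """
    return confidence if not correct else 1.0 - confidence
```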
Logie, M.; Grasso, C.; van Wassenhove, V.
How does the structure of events influence the when and the where of experience in comparison to the what? We developed a novel virtual reality (VR) environment to understand how the quantity of information within nested structures influences participants' memory for events. Participants moved through a series of virtual rooms (events) where images (items) appeared in randomised locations on a 3 by 3 grid located on a wall. Participants were asked to remember the what (old/new), when (timeline location), and where (grid location) of the images they experienced. Two types of nested events were tested (6 rooms, each containing 4 images; 3 rooms, each containing 8 images) without a difference in the duration of presentation. We found a strong temporal compression effect at nested levels, in which participants remembered early items and events as happening later, and later items and events as happening earlier, than in the original experience. Crucially, presenting four-item events resulted in a greater compression rate than eight-item events. We also found greater temporal distances between pairs of items occurring within eight-item events than between pairs of items which occurred on either side of a boundary. Memory for when depends on the compression of information within events.